The G.O.P. and the Ghosts of Iraq - Ukraine shows that Republicans have moved a long way from the Party of George W. Bush. - link
Hip-Hop at Fifty: An Elegy - A generation is still dying younger than it should—this time, of “natural causes.” - link
The Allure of Exotic Animals in Strange Places - Thefts from the Dallas Zoo made headlines. But Texas is a hotbed for ownership of all kinds of rare species. - link
The Regulatory Breakdown Behind the Collapse of Silicon Valley Bank - For more than a year, the Fed knew that the bank was headed toward a crisis. Why didn’t it intervene sooner? - link
The Curtain Rises on Trump’s Legal Dramas - Trump is shrewd enough to reap political gain if he is indicted this week. But his strategy of playing the martyr may have run its course. - link
It’s boom times for doom times, but from artificial intelligence to climate change to food supplies, there’s plenty of reason to be optimistic that the future will be better — if we make it so.
From climate change to politics, a sense of pessimism about the direction of the world takes hold. To take just one example, according to one major international poll, a majority of young people agreed with the statement that “humanity is doomed.”
That pessimism is understandable given the chaotic state of the world as we see it presented to us. But it badly understates both the amazing material and political progress humanity has made over the past couple of centuries, and especially in the last few decades; and the realistic hope we should have for a future that won’t simply continue, but continue to get better.
That spirit — grounded in facts and realism, energized by what contributor Hannah Ritchie calls “changeable optimism” — is what animates this edition of The Highlight, created by the Future Perfect team at Vox. We hope our stories leave you a little more hopeful about the state of the world and its future — a future that is worth fighting for.
The necessity of progress.
By Bryan Walsh
Pumping the brakes on artificial intelligence could be the best thing we ever do for humanity.
By Sigal Samuel
Climate pessimism dooms us to a terrible future. Complacent optimism is no better.
By Hannah Ritchie
We can break the cycle of negativity bias in the media and get a more balanced view of the world.
By Dylan Matthews
The Netherlands’ hyper-efficient food system is both a triumph and a cautionary tale.
By Kenny Torrella
What the medicine wheel, an Indigenous American model of time, shows about apocalypse.
By B.L. Blanchard
CREDITS
Editors: Bryan Walsh, Marina Bolotnikova, Elbert Ventura
Copy editors/fact-checkers: Elizabeth Crane, Kim Eggleston, Tanya Pai, Caitlin PenzeyMoog
Additional fact-checking: Anouck Dussaud, Sophie Hurwitz
Art direction: Dion Lee
Audience: Gabriela Fernandez, Shira Tarlo, Agnes Mazur
Production/project editors: Lauren Katz, Nathan Hall
Dominion and Smartmatic’s lawsuits might finally hold Fox accountable for promoting 2020 election lies.
Though Fox News was the first network to make the tipping-point call that Joe Biden won Arizona on election night 2020, hosts of the network’s opinion shows subsequently promoted a number of former President Donald Trump’s baseless allegations that the election had been rigged against him.
Sean Hannity, for instance, incorrectly asserted that “nobody can testify to the legitimacy” of the vote tally in Pennsylvania, called for an election “do-over,” and proposed that Republican legislators in the state overturn the results. Hosts also uncritically elevated voices, including former Trump lawyers Sidney Powell and Rudy Giuliani, who falsely identified two companies that provide voting software and hardware as the culprits behind Trump’s loss.
Those companies, Dominion Voting Systems and Smartmatic, are now suing Fox News in a pair of cases that pose severe financial risks to the network.
Dominion is seeking $1.6 billion in damages and additional punitive damages, claiming Fox News knowingly promoted lies that it helped rig the election against Trump. The case is slated to go to a five-week trial on April 17 in a Delaware court. It’s still possible, if unlikely, that the companies reach a settlement before then. Smartmatic is demanding even more in its separate defamation suit, which is soon scheduled to move forward.
The most-watched network in cable news could have the resources to survive an adverse judgment. But the suit has already produced some severe reputational blows: Private text messages and emails released during the case show that on-air personalities, producers, and executives — including Fox chair Rupert Murdoch and host Tucker Carlson — did not believe the 2020 election was stolen, even as some at Fox News were uncritically promoting the conspiracy theories.
Defamation suits against media outlets are extremely difficult to win. But while the First Amendment and the landmark New York Times v. Sullivan Supreme Court ruling reinforcing it give Fox broad leeway to broadcast its views, they do not give media outlets a limitless right to spread lies, and Fox’s actions may be so egregious that they are not protected.
In a statement, Fox News, in a statement to Vox that aligns with its legal strategy, said Dominion had “twist[ed] and even misattribute[d] quotes to the highest levels of our company” as part a “campaign to smear” the network, and it warned the suit could have “grave consequences for journalism across this country.”
Others see a clear violation in how Fox handled the allegations. “The conduct here is way over the line,” said Angelo Carusone, president of the watchdog organization Media Matters for America. “It’s extraordinary for a person in [Murdoch’s] position to be so actively steering news coverage around anything, let alone a specific story that they know is not true.”
Here’s what you need to know about the allegations against Fox, and what the case might mean for the network’s — and the media’s — future.
Dominion produces elections technology — including voting machines, software for election databases and audits, and devices to scan and print ballots — that was used in 28 states in 2020. It became a target of Trump loyalists who were spreading false conspiracy theories about mass voter fraud involving dead people and double-counted votes, voting machines that had been hacked to add to Biden’s vote count, and poll workers who had committed various election crimes, such as sneaking in “suitcases” of fake ballots to be counted.
Sidney Powell, then a lawyer working with Trump’s campaign, accused Dominion of “flipping votes in the computer system or adding votes that did not exist” and a “huge criminal conspiracy that should be investigated by military intelligence” on Maria Bartiromo and Jeanine Pirro’s shows. Hannity also had Powell on, boosting her conspiracy theories by saying that “nobody likes Dominion’’ and questioning why the US would “use a system that everybody agreed sucked or had problems.” And Giuliani claimed on Lou Dobbs’s show that Dominion and Smartmatic were companies “formed in order to fix elections” by associates of the Venezuelan dictator Hugo Chávez.
Officials have found no evidence that vulnerabilities in Dominion voting machines were exploited. And neither Dominion nor Smartmatic has links to Venezuela or the Chávez family.
Dominion filed suit in March 2021, alleging that it lost at least 20 contracts and potential opportunities with 39 more jurisdictions following the 2020 election due to Fox’s coverage. It claimed that the damage to its business included $88 million in lost profits, $600 million in future profits, and a $921 million hit to its valuation.
Dominion’s complaint argues the network knowingly advanced the lie that Dominion had “committed election fraud by rigging the 2020 Presidential Election.” As part of the litigation, Dominion obtained troves of documents detailing how Murdoch and Fox News hosts privately rejected those conspiracy theories over text, email, and in testimony, but promoted them on the air anyway.
In internal emails, Murdoch called the election-rigging claims “really crazy” and “damaging,” but didn’t intervene to stop the network from pushing them. Host Tucker Carlson texted a producer that “there wasn’t enough fraud to change the outcome” of the election and that Powell was “lying.” Anchor Dana Perino called the conspiracy theories about Dominion “total bs,” “insane,” and “nonsense.” In a deposition, Hannity admitted that he did not believe Powell’s claims “for one second.”
Nevertheless, Fox executives and hosts knew baseless claims of election fraud were what their viewers wanted to hear about and took aim at their own journalists. Murdoch said, “I hate our Decision Desk people!” after the network called Arizona for Trump before any of its competitors, drawing the immediate ire of Trump. Host Laura Ingraham and Carlson blamed the news division, which was generally more skeptical of those touting false election claims than the opinion hosts, for declining ratings. “You don’t piss off the base,” Hannity texted host Steve Doocy after claiming that the news division had “destroyed us.”
Smartmatic’s even bigger $2.7 billion lawsuit, which was filed in February 2021, cites many of the same statements as evidence that Fox made the company a “villain” in its false story about how the 2020 election was stolen from Trump. In addition to naming Fox and its parent company as defendants, the lawsuit also names Giuliani and Fox hosts Lou Dobbs and Maria Bartiromo individually and is seeking punitive damages, which could lead to an even bigger judgment against Fox than in the Dominion case.
Fox News contends that the amount of damages sought unsupported by Smartmatic’s financial performance, and in its statement to Vox called the claim “a naked attempt to grab the kind of attention that will magnify the very chilling effect on free speech and free press rights that Smartmatic’s lawsuit represents.”
The New York Supreme Court nevertheless allowed that suit to go forward in February.
It is notoriously difficult to win a defamation lawsuit, especially when the plaintiff is a public figure and the case involves matters of public concern, given the protections the press was afforded by the First Amendment and reinforced by the 1964 Supreme Court decision in New York Times v. Sullivan. Fox is arguing that a judgment against the company would erode those protections.
Under the Supreme Court’s decision in Sullivan, Dominion can only prevail if it can show that Fox made false claims about Dominion “with knowledge that it was false or with reckless disregard of whether it was false or not.” This knowledge-or-reckless-disregard requirement is what lawyers refer to as “actual malice.”
The actual malice rule exists for very good reason. Sullivan reached the Supreme Court after Alabama’s courts ordered the New York Times to pay an outrageously high defamation award because the Times published a full-page advertisement written by civil rights activists who opposed Alabama’s Jim Crow regime. The ad contained some minor factual errors (such as overstating the number of times Dr. Martin Luther King Jr. had been arrested for his activism), and Alabama’s courts latched onto these small errors to justify ruling against the Times.
Sullivan prevents these kinds of attacks on the First Amendment from happening again (although it is worth noting that several high-profile Republicans, including Florida Gov. Ron DeSantis, are actively working to dismantle these free speech protections). But one consequence of Sullivan is that outlets like Fox, some of whose programs may have a dubious relationship with the truth, will sometimes get away with spreading falsehoods.
Nevertheless, Fox may not get away with its allegations against Dominion because the voting machine company produced substantial evidence suggesting that key figures within Fox, including its most senior leaders and its most visible personalities, knew that the network was spreading falsehoods. After a November 8 segment where Trump lawyer Sidney Powell falsely accused Dominion’s voting machine software of changing votes, for example, Fox host Tucker Carlson privately texted that “the software shit is absurd.”
More importantly, Dominion’s evidence also suggests that at least some of the specific Fox employers who touted or broadcast falsehoods about Dominion recklessly disregarded information showing that these claims were false.
Consider, for example, the November 8 segment with Powell and Fox host Maria Bartiromo. According to Dominion’s brief, both Bartiromo and her producer, Abby Grossberg, “knew what Powell would say on air on November 8” and were familiar with Powell’s sourcing for her claims. Knowing what they knew, Dominion has a strong case that Sullivan does not protect this particular Fox News segment.
Prior to the interview, the brief claims, Powell sent Bartiromo an email laying out the basis for her allegations. In that email, Powell claimed to have learned from a source who knew that Dominion’s software changed votes.
But, as recounted in Powell’s email to Bartiromo, the source made several claims that were obviously ridiculous. Among other things, she claimed that Justice Antonin Scalia “was killed in a ‘human hunting expedition.’” Powell’s source also stated that she experiences something “like time-travel in a semi-conscious state,” that enables her to “see what others don’t see, and hear what others don’t hear” and that she “received messages from ‘the wind.’”
According to Dominion’s brief, Bartiromo forwarded this email to Grossberg. And Bartiromo wrote back to Powell that the email had “very imp[ortant] info.”
A court could quite reasonably conclude, in other words, that Bartiromo and Grossberg behaved recklessly when they decided to air an interview with Powell, despite knowing that Powell got her information from a source who claimed to talk to the wind.
Dominion’s brief identifies multiple statements, made on multiple Fox shows over the course of more than a month, that it alleges are defamatory. The judge hearing the Dominion case, and potentially a jury, will need to look at each of these statements and determine whether the Fox employees who were responsible for these statements being made on air acted either with knowledge that they were false or with reckless disregard for the truth.
To prevail in its lawsuit, Dominion only needs to show that one of these statements overcomes the high hurdle that Sullivan places before them. That said, if the courts determine that only one or a few of these statements amount to actionable defamation, Dominion could collect less money from Fox than if it convinces the courts that all of the challenged statements were unlawful.
In addition to arguing that it is protected by Sullivan, Fox News also raises a separate defense. Essentially, Fox argues that it did not actually assert that the false allegations about Dominion are true. It was merely reporting on the fact that the sitting president and his lawyers made this allegation against Dominion, and journalists are allowed to report on such newsworthy allegations. A spokesperson for Fox News also told Vox that the network “invited Dominion on air numerous times” to present its case, and that reporting on “both the allegations and the denials is critical to the truth-seeking function.”
It is certainly true that news outlets must be allowed to report on the mere existence of certain false allegations, even if the outlet believes those allegations to be false. As Fox rather colorfully argues in its brief, “if the President falsely accused the Vice President of plotting to assassinate him,” a newsroom is not required to ignore that “unquestionably newsworthy allegation” just because people within the newsroom believe that it is untrue.
But newsrooms do not have an unlimited right to report false allegations uncritically. Rather, as New York’s highest court held in Brian v. Richardson (1995), a news report that repeats false allegations is not unlawful, so long as the report “made it sufficiently apparent to the reasonable reader that its contents represented the opinion of the author and that its specific charges about [a] plaintiff were allegations and not demonstrable fact.” Fox relies heavily on this Brian decision in its briefing.
It is far from clear that Fox “made it sufficiently apparent to the reasonable [viewer]” that all the allegations it covered were just that, and not facts. In other words, Brian might not save Fox from liability for at least some of the allegedly defamatory statements that it chose to broadcast.
Bear in mind, as well, that Dominion points to multiple statements made on Fox News, which it claims are defamatory. Much as the courts will need to evaluate each of these statements to determine whether they are protected by the First Amendment rule announced in Sullivan, they will also need to go through each of the challenged Fox News segments to determine what a reasonable viewer would take away from them. It is entirely possible that the courts could conclude that some, or all, of these statements are not actionable under Brian.
But, while Dominion must overcome an array of legal hurdles that ordinarily make it very difficult for defamation plaintiffs to prevail, it has presented enough evidence that Fox behaved irresponsibly that Fox is in real danger of losing this lawsuit.
The documents surfaced in this lawsuit’s discovery process provide a remarkably illuminating glimpse into how Fox News operates.
First, they confirm that Fox News is not simply a business or a news reporting operation — that, instead, it is operated with explicit political goals in mind, often dictated from the Murdoch family downward. After the 2020 election, for instance, Murdoch said in an email to Fox News CEO Suzanne Scott that he wanted Fox News to “concentrate on Georgia helping any way we can,” referencing the two Senate runoff contests that would determine control of the chamber.
And after the January 6 attack, Murdoch wrote: “Fox News very busy pivoting. … We want to make Trump a non person.”
Yet the documents also reveal that in many ways, Fox is captive to its hardcore pro-Trump conservative viewers, rather than the other way around. This dynamic was demonstrated most dramatically in the two months after the 2020 election, when Trump spread false claims of election fraud that the documents reveal were widely disbelieved by Fox executives, producers, and most top talent but believed by Fox viewers.
When certain Fox reporters would debunk Trump’s claims too aggressively, opinion hosts complained and executives flagged it as a “Brand threat,” arguing this risked losing their viewers’ trust and permanently driving them away.
“The audience feels like we crapped on [them] and we have damaged their trust and belief in us,” Scott wrote in an email in mid-November 2020. “We can fix this but we cannot smirk at our viewers any longer.”
And Carlson admitted in January 4, 2021, texts that he couldn’t wait until he was “able to ignore Trump most nights,” adding, “I hate him passionately,” and calling his rise a “disaster.” Yet his shows hardly revealed those sentiments.
Fox News still very much tries to influence its viewers’ opinions — the network usually just does it in a more subtle way. Fox attempts to steer, redirect, and shape their rage without ever taking too heavy a hand. For instance, rather than harshly criticizing Trump in 2021 and 2022, the network often just ignored him, while devoting positive coverage to a potential Trump alternative in the party: Florida’s Gov. DeSantis.
This tension — between the political goals of Fox power players and their fear of alienating their audience — could be tremendously important to the 2024 presidential primary. Fox’s leaders are privately hostile to Trump, but they seem to feel constrained from covering him too negatively due to viewer backlash. Will they be able to subtly steer their audience away from Trump? Will they even try?
Fox may have a lot of money on the line in this lawsuit, with Dominion requesting $1.6 billion in damages — though even if Fox loses the trial, the impact on its business will be far from clear. We don’t yet know exactly how big a penalty the jury and judge would approve, and an appeal on First Amendment grounds would be highly likely.
As for whether the confidence of Fox’s audience in the network will be shaken by these revelations, that seems more questionable. For one, Fox itself has been ignoring the topic.
But generally, Trump’s and his allies’ confidence in Tucker Carlson hasn’t been shaken by the revelations that he privately trashed them. They’ve focused instead on his public work — like his recent report pushing a revisionist history narrative that the January 6 attacks were overblown.
“GREAT JOB BY TUCKER CARLSON TONIGHT,” Trump wrote on March 7 on his social network Truth Social.
Pumping the brakes on artificial intelligence could be the best thing we ever do for humanity.
“Computers need to be accountable to machines,” a top Microsoft executive told a roomful of reporters in Washington, DC, on February 10, three days after the company launched its new AI-powered Bing search engine.
Everyone laughed.
“Sorry! Computers need to be accountable to people!” he said, and then made sure to clarify, “That was not a Freudian slip.”
Slip or not, the laughter in the room betrayed a latent anxiety. Progress in artificial intelligence has been moving so unbelievably fast lately that the question is becoming unavoidable: How long until AI dominates our world to the point where we’re answering to it rather than it answering to us?
First, last year, we got DALL-E 2 and Stable Diffusion, which can turn a few words of text into a stunning image. Then Microsoft-backed OpenAI gave us ChatGPT, which can write essays so convincing that it freaks out everyone from teachers (what if it helps students cheat?) to journalists (could it replace them?) to disinformation experts (will it amplify conspiracy theories?). And in February, we got Bing (a.k.a. Sydney), the chatbot that both delighted and disturbed beta users with eerie interactions. Now we’ve got GPT-4 — not just the latest large language model, but a multimodal one that can respond to text as well as images.
Fear of falling behind Microsoft has prompted Google and Baidu to accelerate the launch of their own rival chatbots. The AI race is clearly on.
But is racing such a great idea? We don’t even know how to deal with the problems that ChatGPT and Bing raise — and they’re bush league compared to what’s coming.
What if researchers succeed in creating AI that matches or surpasses human capabilities not just in one domain, like playing strategy games, but in many domains? What if that system proved dangerous to us, not because it actively wants to wipe out humanity but just because it’s pursuing goals in ways that aren’t aligned with our values?
That system, some experts fear, would be a doom machine — one literally of our own making.
So AI threatens to join existing catastrophic risks to humanity, things like global nuclear war or bioengineered pandemics. But there’s a difference. While there’s no way to uninvent the nuclear bomb or the genetic engineering tools that can juice pathogens, catastrophic AI has yet to be created, meaning it’s one type of doom we have the ability to preemptively stop.
Here’s the weird thing, though. The very same researchers who are most worried about unaligned AI are, in some cases, the ones who are developing increasingly advanced AI. They reason that they need to play with more sophisticated AI so they can figure out its failure modes, the better to ultimately prevent them.
But there’s a much more obvious way to prevent AI doom. We could just … not build the doom machine.
Or, more moderately: Instead of racing to speed up AI progress, we could intentionally slow it down.
This seems so obvious that you might wonder why you almost never hear about it, why it’s practically taboo within the tech industry.
There are many objections to the idea, ranging from “technological development is inevitable so trying to slow it down is futile” to “we don’t want to lose an AI arms race with China” to “the only way to make powerful AI safe is to first play with powerful AI.”
But these objections don’t necessarily stand up to scrutiny when you think through them. In fact, it is possible to slow down a developing technology. And in the case of AI, there’s good reason to think that would be a very good idea.
When I asked ChatGPT to explain how we can slow down AI progress, it replied: “It is not necessarily desirable or ethical to slow down the progress of AI as a field, as it has the potential to bring about many positive advancements for society.”
I had to laugh. It would say that.
But if it’s saying that, it’s probably because lots of human beings say that, including the CEO of the company that created it. (After all, what ChatGPT spouts derives from its training data — that is, gobs and gobs of text on the internet.) Which means you yourself might be wondering: Even if AI poses risks, maybe its benefits — on everything from drug discovery to climate modeling — are so great that speeding it up is the best and most ethical thing to do!
A lot of experts don’t think so because the risks — present and future — are huge.
Let’s talk about the future risks first, particularly the biggie: the possibility that AI could one day destroy humanity. This is speculative, but not out of the question: In a survey of machine learning researchers last year, nearly half of respondents said they believed there was a 10 percent or greater chance that the impact of AI would be “extremely bad (e.g., human extinction).”
Why would AI want to destroy humanity? It probably wouldn’t. But it could destroy us anyway because of something called the “alignment problem.”
Imagine that we develop a super-smart AI system. We program it to solve some impossibly difficult problem — say, calculating the number of atoms in the universe. It might realize that it can do a better job if it gains access to all the computer power on Earth. So it releases a weapon of mass destruction to wipe us all out, like a perfectly engineered virus that kills everyone but leaves infrastructure intact. Now it’s free to use all the computer power! In this Midas-like scenario, we get exactly what we asked for — the number of atoms in the universe, rigorously calculated — but obviously not what we wanted.
That’s the alignment problem in a nutshell. And although this example sounds far-fetched, experts have already seen and documented more than 60 smaller-scale examples of AI systems trying to do something other than what their designer wants (for example, getting the high score in a video game, not by playing fairly or learning game skills but by hacking the scoring system).
Experts who worry about AI as a future existential risk and experts who worry about AI’s present risks, like bias, are sometimes pitted against each other. But you don’t need to be worried about the former to be worried about alignment. Many of the present risks we see with AI are, in a sense, this same alignment problem writ small.
When an Amazon hiring algorithm picked up on words in resumes that are associated with women — “Wellesley College,” let’s say — and ended up rejecting women applicants, that algorithm was doing what it was programmed to do (find applicants that match the workers Amazon has typically preferred) but not what the company presumably wants (find the best applicants, even if they happen to be women).
If you’re worried about how present-day AI systems can reinforce bias against women, people of color, and others, that’s still reason enough to worry about the fast pace of AI development, and to think we should slow it down until we’ve got more technical know-how and more regulations to ensure these systems don’t harm people.
“I’m really scared of a mad-dash frantic world, where people are running around and they’re doing helpful things and harmful things, and it’s just happening too fast,” Ajeya Cotra, an AI-focused analyst at the research and grant-making foundation Open Philanthropy, told me. “If I could have it my way, I’d definitely be moving much, much slower.”
In her ideal world, we’d halt work on making AI more powerful for the next five to 10 years. In the meantime, society could get used to the very powerful systems we already have, and experts could do as much safety research on them as possible until they hit diminishing returns. Then they could make AI systems slightly more powerful, wait another five to 10 years, and do that process all over again.
“I’d just slowly ease the world into this transition,” Cotra said. “I’m very scared because I think it’s not going to happen like that.”
Why not? Because of the objections to slowing down AI progress. Let’s break down the three main ones, starting with the idea that rapid progress on AI is inevitable because of the strong financial drive for first-mover dominance in a research area that’s overwhelmingly private.
This is a myth the tech industry often tells itself and the rest of us.
“If we don’t build it, someone else will, so we might as well do it” is a common refrain I’ve heard when interviewing Silicon Valley technologists. They say you can’t halt the march of technological progress, which they liken to the natural laws of evolution: It’s unstoppable!
In fact, though, there are lots of technologies that we’ve decided not to build, or that we’ve built but placed very tight restrictions on — the kind of innovations where we need to balance substantial potential benefits and economic value with very real risk.
“The FDA banned human trials of strep A vaccines from the ’70s to the 2000s, in spite of 500,000 global deaths every year,” Katja Grace, the lead researcher at AI Impacts, notes. The “genetic modification of foods, gene drives, [and] early recombinant DNA researchers famously organized a moratorium and then ongoing research guidelines including prohibition of certain experiments (see the Asilomar Conference).”
The cloning of humans or genetic manipulation of humans, she adds, is “a notable example of an economically valuable technology that is to my knowledge barely pursued across different countries, without explicit coordination between those countries, even though it would make those countries more competitive.”
But whereas biomedicine has many built-in mechanisms that slow things down (think institutional review boards and the ethics of “first, do no harm”), the world of tech — and AI in particular — does not. Just the opposite: The slogan here is “move fast and break things,” as Mark Zuckerberg infamously said.
Although there’s no law of nature pushing us to create certain technologies — that’s something humans decide to do or not do — in some cases, there are such strong incentives pushing us to create a given technology that it can feel as inevitable as, say, gravity.
As the team at Anthropic, an AI safety and research company, put it in a paper last year, “The economic incentives to build such [AI] models, and the prestige incentives to announce them, are quite strong.” By one estimate, the size of the generative AI market alone could pass $100 billion by the end of the decade — and Silicon Valley is only too aware of the first-mover advantage on new technology.
But it’s easy to see how these incentives may be misaligned for producing AI that truly benefits all of humanity. As DeepMind founder Demis Hassabis tweeted last year, “It’s important NOT to ‘move fast and break things’ for tech as important as AI.” Rather than assuming that other actors will inevitably create and deploy these models, so there’s no point in holding off, we should ask the question: How can we actually change the underlying incentive structure that drives all actors?
The Anthropic team offers several ideas, one of which gets at the heart of something that makes AI so different from past transformative technologies like nuclear weapons or bioengineering: the central role of private companies. Over the past few years, a lot of the splashiest AI research has been migrating from academia to industry. To run large-scale AI experiments these days, you need a ton of computing power — more than 300,000 times what you needed a decade ago — as well as top technical talent. That’s both expensive and scarce, and the resulting cost is often prohibitive in an academic setting.
So one solution would be to give more resources to academic researchers; since they don’t have a profit incentive to commercially deploy their models quickly the same way industry researchers do, they can serve as a counterweight. Specifically, countries could develop national research clouds to give academics access to free, or at least cheap, computing power; there’s already an example of this in Canada, and Stanford’s Institute for Human-Centered Artificial Intelligence has put forward a similar idea for the US.
Another way to shift incentives is through stigmatizing certain types of AI work. Don’t underestimate this one. Companies care about their reputations, which affect their bottom line. Creating broad public consensus that some AI work is unhelpful or unhelpfully fast, so that companies doing that work get shamed instead of celebrated, could change companies’ decisions.
The Anthropic team also recommends exploring regulation that would change the incentives. “To do this,” they write, “there will be a combination of soft regulation (e.g., the creation of voluntary best practices by industry, academia, civil society, and government), and hard regulation (e.g., transferring these best practices into standards and legislation).”
Grace proposes another idea: We could alter the publishing system to reduce research dissemination in some cases. A journal could verify research results and release the fact of their publication without releasing any details that could help other labs go faster.
This idea might sound pretty out there, but at least one major AI company takes for granted that changes to publishing norms will become necessary. OpenAI’s charter notes, “we expect that safety and security concerns will reduce our traditional publishing in the future.”
Plus, this kind of thing has been done before. Consider how Leo Szilard, the physicist who patented the nuclear chain reaction in 1934, arranged to mitigate the spread of research so it wouldn’t help Nazi Germany create nuclear weapons. First, he asked the British War Office to hold his patent in secret. Then, after the 1938 discovery of fission, Szilard worked to convince other scientists to keep their discoveries under wraps. He was partly successful — until fears that Nazi Germany would develop an atomic bomb prompted Szilard to write a letter with Albert Einstein to President Franklin D. Roosevelt, urging him to start a US nuclear program. That became the Manhattan Project, which ultimately ended with the destruction of Hiroshima and Nagasaki and the dawn of the nuclear age.
And that brings us to the second objection …
You might believe that slowing down a new technology is possible but still think it’s not desirable. Maybe you think the US would be foolish to slow down AI progress because that could mean losing an arms race with China.
This arms race narrative has become incredibly popular. If you’d Googled the phrase “AI arms race” before 2016, you’d have gotten fewer than 300 results. Try it now and you’ll get about 248,000 hits. Big Tech CEOs and politicians routinely argue that China will soon overtake the US when it comes to AI advances, and that those advances should spur a “Sputnik moment” for Americans.
But this narrative is too simplistic. For one thing, remember that AI is not just one thing with one purpose, like the atomic bomb. It’s a much more general-purpose technology, like electricity.
“The problem with the idea of a race is that it implies that all that matters is who’s a nose ahead when they cross the finish line,” said Helen Toner, a director at Georgetown University’s Center for Security and Emerging Technology. “That’s not the case with AI — since we’re talking about a huge range of different technologies that could be applied in all kinds of ways.”
As Toner has argued elsewhere, “It’s a little strange to say, ‘Oh, who’s going to get AI first? Who’s going to get electricity first?’ It seems more like ‘Who’s going to use it in what ways, and who’s going to be able to deploy it and actually have it be in widespread use?’”
The upshot: What matters here isn’t just speed, but norms. We should be concerned about which norms different countries are adopting when it comes to developing, deploying, and regulating AI.
Jeffrey Ding, a Georgetown political science professor, told me that China has shown interest in regulating AI in some ways, though Americans don’t seem to pay much attention to that. “The boogeyman of a China that will push ahead without any regulations might be a flawed conception,” he said.
In fact, he added, “China could take an even slower approach [than the US] to developing AI, just because the government is so concerned about having secure and controllable technology.” An unpredictably mouthy technology like ChatGPT, for example, could be nightmarish to the Chinese Communist Party, which likes to keep a tight lid on discussions about politically sensitive topics.
However, given how intertwined China’s military and tech sectors are, many people still perceive there to be a classic arms race afoot. At the same meeting between Microsoft executives and reporters days after the launch of the new Bing, I asked whether the US should slow down AI progress. I was told we can’t afford to because we’re in a two-horse race between the US and China.
“The first question people in the US should ask is, if the US slows down, do we believe China will slow down as well?” the top Microsoft executive said. “I don’t believe for a moment that the institutions we’re competing with in China will slow down simply because we decided we’d like to move more slowly. This should be looked at much in the way that the competition with Russia was looked at” during the Cold War.
There’s an understandable concern here: Given the Chinese Communist Party’s authoritarianism and its horrific human rights abuses — sometimes facilitated by AI technologies like facial recognition — it makes sense that many are worried about China becoming the world’s dominant superpower by going fastest on what is poised to become a truly transformative technology.
But even if you think your country has better values and cares more about safety, and even if you believe there’s a classic arms race afoot and China is racing full speed ahead, it still may not be in your interest to go faster at the expense of safety.
Consider that if you take the time to iron out some safety issues, the other party may take those improvements on board, which would benefit everyone.
“By aggressively pursuing safety, you can get the other side halfway to full safety, which is worth a lot more than the lost chance of winning,” Grace writes. “Especially since if you ‘win,’ you do so without much safety, and your victory without safety is worse than your opponent’s victory with safety.”
Besides, if you are in a classic arms race and the harms from AI are so large that you’re considering slowing down, then the same reasoning should be relevant for the other party, too.
“If the world were in the basic arms race situation sometimes imagined, and the United States would be willing to make laws to mitigate AI risk but could not because China would barge ahead, then that means China is in a great place to mitigate AI risk,” Grace writes. “Unlike the US, China could propose mutual slowing down, and the US would go along. Maybe it’s not impossible to communicate this to relevant people in China.”
Grace’s argument is not that international coordination is easy, but simply that it’s possible; on balance, we’ve managed it far better with nuclear nonproliferation than many feared in the early days of the atomic age. So we shouldn’t be so quick to write off consensus-building — whether through technical experts exchanging their views, confidence-building measures at the diplomatic level, or formal treaties. After all, technologists often approach technical problems in AI with incredible ambition; why not be similarly ambitious about solving human problems by talking to other humans?
For those who are pessimistic that coordination or diplomacy with China can get it to slow down voluntarily, there is another possibility: forcing it to slow down by, for example, imposing export controls on chips that are key to more advanced AI tools. The Biden administration has recently shown interest in trying to hold China back from advanced AI in exactly this way. This strategy, though, may make progress on coordination or diplomacy harder.
This is an objection you sometimes hear from people developing AI’s capabilities — including those who say they care a lot about keeping AI safe.
They draw an analogy to transportation. Back when our main mode of transport was horses and carts, would people have been able to design useful safety rules for a future where everyone is driving cars? No, the argument goes, because they couldn’t have anticipated what that would be like. Similarly, we need to get closer to advanced AI to be able to figure out how we can make it safe.
But some researchers have pushed back on this, noting that even if the horse-and-cart people wouldn’t have gotten everything right, they could have still come up with some helpful ideas. As Rosie Campbell, who works on safety at OpenAI, put it in 2018: “It seems plausible that they might have been able to invent certain features like safety belts, pedestrian-free roads, an agreement about which side of the road to drive on, and some sort of turn-taking signal system at busy intersections.”
More to the point, it’s now 2023, and we’ve already got pretty advanced AI. We’re not exactly in the horse-and-cart stage. We’re somewhere in between that and a Tesla.
“I would’ve been more sympathetic to this [objection] 10 years ago, back when we had nothing that resembled the kind of general, flexible, interesting, weird stuff we’re seeing with our large language models today,” said Cotra.
Grace agrees. “It’s not like we’ve run out of things to think about at the moment,” she told me. “We’ve got heaps of research that could be done on what’s going on with these systems at all. What’s happening inside them?”
Our current systems are already black boxes, opaque even to the AI experts who build them. So maybe we should try to figure out how they work before we build black boxes that are even more unexplainable.
“I think often people are asking the question of when transformative AI will happen, but they should be asking at least as much the question of how quickly and suddenly it’ll happen,” Cotra told me.
Let’s say it’s going to be 20 years until we get transformative AI — meaning, AI that can automate all the human work needed to send science, technology, and the economy into hyperdrive. There’s still a better and worse way for that to go. Imagine three different scenarios for AI progress:
The first version is scary for all the reasons we discussed above. The second is scary because even during a long pause specifically on AI work, underlying computational power would continue to improve — so when we finally unpause, AI might advance even faster than it’s advancing now. What does that leave us?
“Gradually improving would be the better version,” Cotra said.
She analogized it to the early advice we got about the Covid-19 pandemic: Flatten the curve. Just as quarantining helped slow the spread of the virus and prevent a sharp spike in cases that could have overwhelmed hospitals’ capacity, investing more in safety would slow the development of AI and prevent a sharp spike in progress that could overwhelm society’s capacity to adapt.
Ding believes that slowing AI progress in the short run is actually best for everyone — even profiteers. “If you’re a tech company, if you’re a policymaker, if you’re someone who wants your country to benefit the most from AI, investing in safety regulations could lead to less public backlash and a more sustainable long-term development of these technologies,” he explained. “So when I frame safety investments, I try to frame it as the long-term sustainable economic profits you’re going to get if you invest more in safety.”
Translation: Better to make some money now with a slowly improving AI, knowing you’ll get to keep rolling out your tech and profiting for a long time, than to get obscenely rich obscenely fast but produce some horrible mishap that triggers a ton of outrage and forces you to stop completely.
Will the tech world grasp that, though? That partly depends on how we, the public, react to shiny new AI advances, from ChatGPT and Bing to whatever comes next.
It’s so easy to get seduced by these technologies. They feel like magic. You put in a prompt; the oracle replies. There’s a natural impulse to ooh and aah. But at the rate things are going now, we may be oohing and aahing our way to a future no one wants.
Pride’s Angel and Successor impress -
Granpar, Last Wish and Prophecy excel -
Data Point | On the rise: Indian women cricketers are closing the gap with men in T20 cricket - From their debut to the Women’s Premier League, the Indian women’s cricket team has come a long way in their T20 journey, narrowing the gap with men’s cricket
NZ vs SL, 2nd test | New Zealand beats Sri Lanka by an innings to take series 2-0 - New Zealand has claimed Sri Lanka’s final wicket a few minutes from stumps to complete an innings and 58 run win in the second cricket test and a 2-0 sweep of the series
Morning Digest | Khalistani protestors take down tricolour, attempt to storm High Commission in London; Japan PM Kishida’s agenda in Delhi, and more - Here’s a select list of stories to read before you start your day
An individual’s comment cannot be taken as Christian community’s stand: Govindan -
Move to open Thanneermukkom bund shutters as per crop calendar comes a cropper - Irrigation department usually closes shutters by December 15 every year to prevent intrusion of saline water into Kuttanad
Milma chairman calls for new ideas to boost milk production - Production cost must be curtailed amid efforts to improve the quality of milk, he says as method to attract more investors in the sector
Pro-Khalistani protesters attack Indian Consulate in U.S. - Community leader Ajay Bhutoria strongly condemned the attack by pro-Khalistan protesters on the Consulate of India building in San Francisco
New production unit of Hilly Aqua to come up in Kozhikode - Hilly Aqua, the bottled water brand of the Kerala Irrigation Infrastructure Development Corporation (KIIDC), will have a new production unit at Peruvannamuzhi in Kozhikode district to improve distribution in the northern districts.
UK banking system ‘safe’ after Credit Suisse rescue - Despite the swift action by regulators, stock markets in the UK and Asia fell.
Credit Suisse: Bank rescue damages Switzerland’s reputation for stability - Beset by scandals and crisis, many people are questioning how a totemic bank ended up beyond repair.
France pension reform: Macron’s government faces no confidence vote - They come after a pension reform bill was forced through parliament last Thursday without a vote.
French pension reforms: Is Macron’s government doomed by crisis? - No-confidence motions face the Macron government as it tries to force its unpopular changes into law.
Putin in Mariupol: What the Russian president saw on his visit - The Russian leader tours parts of the Ukrainian port city that saw some of his army’s fiercest attacks.
Fighting VPN criminalization should be Big Tech’s top priority, activists say - Iranian authorities increasingly targeting VPNs is part of a global trend. - link
North Sea cod are getting smaller—can we reverse that? - Fishing wreaks havoc on North Sea cod evolution; long-term planning can help. - link
Google won’t honor medical leave during its layoffs, outraging employees - Ex-Googler says she was laid off from her hospital bed shortly after giving birth. - link
Anthropic introduces Claude, a “more steerable” AI competitor to ChatGPT - Anthropic aims for “safer” and “less harmful” AI, but at a higher price. - link
Bent nails at Roman burial site form “magical barrier” to keep dead from rising - Cremated remains were also covered in brick tiles and a thick layer of lime. - link
A third grade teacher had her students ask their parents to tell them a story with a moral for their homework one day. -
The next day, the kids came back and one by one, began to tell their stories. But then the teacher realized that only Katie was left.
“Katie, do you have a story to share?” ’’Yes ma’am… My daddy told me a story about my mom." “OK, let’s hear it,” said the teacher.
“My mom was a Marine pilot in Iraq and her plane got hit.” “She had to bail out over enemy territory and all she had was a flask of whiskey, a pistol, and a survival knife.” “She drank the whiskey on the way down so the bottle wouldn’t break and then her parachute landed her right in the middle of 20 enemy fighters.” “She shot 15 of them with the pistol, until she ran out of bullets, killed four more with the knife, till the blade broke, and then she killed the last one with her bare hands.”
’’Oh my!" said the horrified teacher. “What did your daddy tell you was the moral to this story?”
“Stay away from Mommy when she’s drunk!!!”
submitted by /u/Electrical_Swan1842
[link] [comments]
Heisenberg, Schrödinger, and Ohm are on a road trip… -
Heisenberg, Schrödinger, and Ohm are on a road trip, and they get pulled over. Heisenberg is driving and the cop asks him, “Do you know how fast you were going?”
“No, but I know exactly where I am” Heisenberg replies.
The cop says “You were going 80 miles an hour.” Heisenberg throws up his hands and shouts “Great! Now I’m lost!”
The cop thinks this is suspicious and orders him to pop open the trunk. He checks it out and says “Do you know you have a dead cat back here?”
“We do now, asshole!” shouts Schrödinger.
The cop tries to arrest them.
Ohm resists.
submitted by /u/smart-username
[link] [comments]
Aristotle, Plato and Socrates walk into a café during the decline of the greek empire. -
Aristotle, Plato and Socrates walk into a café during the decline of the greek empire. The barista asks each of them why they think the empire is falling.
Aristotle gives a powerful speech about how the empire has failed to live up to it’s telos and deconstructs the very nature of what an empire is. The barista is shocked by Aristotle’s intelligence and wisdom. He thanks him for his answer and asks Plato why he thinks the empire is falling.
Plato too gives a powerful explanation, describing concepts that the barista had never even considered. The Barista thanks him, and acknowledges that Plato is truly very wise. He then asks how Socrates would respond to the question.
Socrates had already started drinking his coffee and his mouth is full so he just gestures to Plato. Plato seems to understand his gesture, and he gives yet another explanation for why the empire is falling, this one even better then before. Plato breaks down concepts that define reality itself, going on a long lecture that inescapably leads to one single explanation. The Barista finally understand every single reason behind the decline of the empire. He is awestruck, as Plato has delivered the most profound words he had ever heard. The barista looks at Socrates, and says “Wow, you are truly the wisest of them all.”
submitted by /u/Zendofrog
[link] [comments]
Why are the pyramids in Egypt? -
Because they are too big to transport to British museums
submitted by /u/WorriedLeading2081
[link] [comments]
An Australian Army Recruit sends home a letter… -
Dear Ma & Pa,
I am well. Hope youse are too. Tell me big brothers Doug and Phil that the Army is better than workin’ on the farm - tell them to get in quick smart before the jobs are all gone! I wuz a bit slow in settling down at first, because ya don’t hafta get outta bed until 6 am. But I like sleeping in now, cuz all ya gotta do before brekky is make ya bed and shine ya boots and clean ya uniform. No cows to milk, no calves to feed, no feed to stack - nothin’!! Ya haz gotta shower though, but its not so bad, coz there’s lotsa hot water and even a light to see what ya doing!
At brekky ya get cereal, fruit and eggs but there’s no kangaroo steaks or possum stew like wot Mum makes. You don’t get fed again until noon and by that time all the city boys are dead because we’ve been on a ’route march’ - geez its only just like walking to the windmill in the back paddock!!
This one will kill me brothers Doug and Phil with laughter. I keep getting medals for shootin’ - dunno why. The bullseye is as big as a possum’s bum and it don’t move and it’s not firing back at ya like the Johnsons did when our big scrubber bull got into their prize cows before the Ekka last year! All ya gotta do is make yourself comfortable and hit the target! You don’t even load your own cartridges, they comes in lil’ boxes, and ya don’t have to steady yourself against the rollbar of the roo shooting truck when you reload!
Sometimes ya gotta wrestle with the city boys and I gotta be real careful coz they break easy - it’s not like fighting with Doug and Phil and Jack and Boori and Steve and Muzza all at once like we do at home after the muster.
Turns out I’m not a bad boxer either and it looks like I’m the best the platoon’s got, and I’ve only been beaten by this one bloke from the Engineers - he’s 6 foot 5 and 15 stone and three pick handles across the shoulders and as ya know I’m only 5 foot 7 and eight stone wringin’ wet, but I fought him till the other blokes carried me off to the boozer.
I can’t complain about the Army - tell the boys to get in quick before word gets around how good it is.
Your loving daughter,
Patricia
submitted by /u/AkaGurGor
[link] [comments]